This dissertation presents several new methods of supervised and unsupervised learning of word sense disambiguation models. The supervised methods focus on performing model searches through a space of probabilistic models, and the unsupervised methods rely on the use of Gibbs sampling and the Expectation Maximization (EM) algorithm. In both the supervised and unsupervised cases, the Naive Bayesian model is found to perform well. An explanation for this success is presented in terms of learning rates and bias-variance decompositions.
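To make the Naive Bayesian approach concrete, the following is a minimal illustrative sketch, not the dissertation's implementation: a classifier that chooses a word sense by maximizing the log prior plus the summed log likelihoods of the context words, with add-one smoothing. All function names and the toy data are hypothetical.

```python
import math
from collections import Counter, defaultdict

def train_nb(labeled_contexts):
    """Estimate Naive Bayes counts from (sense, context_words) training pairs."""
    sense_counts = Counter()                 # P(sense) numerator
    word_counts = defaultdict(Counter)       # P(word | sense) numerators
    vocab = set()
    for sense, words in labeled_contexts:
        sense_counts[sense] += 1
        for w in words:
            word_counts[sense][w] += 1
            vocab.add(w)
    return sense_counts, word_counts, vocab

def disambiguate(context, sense_counts, word_counts, vocab):
    """Return the sense maximizing log P(sense) + sum_w log P(w | sense)."""
    total = sum(sense_counts.values())
    best_sense, best_score = None, float("-inf")
    for sense, n in sense_counts.items():
        score = math.log(n / total)
        # Add-one (Laplace) smoothing over the observed vocabulary.
        denom = sum(word_counts[sense].values()) + len(vocab)
        for w in context:
            score += math.log((word_counts[sense][w] + 1) / denom)
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

# Hypothetical toy data: contexts of the ambiguous word "bank".
data = [
    ("money", ["deposit", "loan"]),
    ("money", ["loan", "interest"]),
    ("river", ["water", "shore"]),
]
params = train_nb(data)
print(disambiguate(["loan", "deposit"], *params))  # → money
print(disambiguate(["water"], *params))            # → river
```

Despite its strong conditional-independence assumption over context words, this simple model is exactly the kind of classifier whose robustness the dissertation analyzes via learning rates and the bias-variance decomposition.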